Learning from Ontology Streams with Semantic Concept Drift
Data stream learning has been widely studied for extracting knowledge
structures from continuous and rapid data records. On the Semantic Web, data is
interpreted in ontologies and its ordered sequence is represented as an
ontology stream. Our work exploits the semantics of such streams to tackle the
problem of concept drift, i.e., unexpected changes in the data distribution
that make most models less accurate as time passes. To this end, we revisit (i)
semantic inference in the context of supervised stream learning, and (ii)
models with semantic embeddings. Experiments show accurate prediction with
data from Dublin and Beijing.
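The degradation this abstract describes can be seen in a toy simulation: a classifier frozen on the pre-drift concept keeps its accuracy until the label function changes, then degrades. This is a minimal sketch of the drift problem only, not the paper's semantic approach; the thresholds and the drift point are invented for illustration.

```python
import random

random.seed(0)

# Concept drift: before time step 500 the positive class is x > 0.3;
# afterwards it is x > 0.7. A model frozen on the old concept degrades.
def true_label(x, t):
    return int(x > (0.7 if t >= 500 else 0.3))

frozen_threshold = 0.3  # "learned" on the pre-drift portion of the stream
correct_before = correct_after = 0
for t in range(1000):
    x = random.random()
    pred = int(x > frozen_threshold)
    ok = pred == true_label(x, t)
    if t < 500:
        correct_before += ok
    else:
        correct_after += ok

print(correct_before / 500)  # 1.0: the model matches the old concept exactly
print(correct_after / 500)   # ~0.6: accuracy drops once the concept drifts
```

A stream learner that retrains on a recent window would recover most of the lost accuracy; the abstract's contribution is to drive that adaptation with the semantics of the ontology stream rather than raw statistics alone.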
Knowledge-based Transfer Learning Explanation
Machine learning explanation can significantly boost machine learning's
application in decision making, but the usability of current methods is limited
for human-centric explanation, especially for transfer learning, an important
branch of machine learning that aims to use knowledge from one learning
domain (i.e., a pair of a dataset and a prediction task) to enhance prediction
model training in another learning domain. In this paper, we propose an
ontology-based approach for human-centric explanation of transfer learning.
Three kinds of knowledge-based explanatory evidence of different
granularities, namely general factors, particular narrators, and core
contexts, are first proposed and then inferred with both local ontologies and
external knowledge bases. An evaluation with US flight data and DBpedia
shows their reliability and availability in explaining the transferability
of feature representations in flight departure delay forecasting.
Comment: Accepted by International Conference on Principles of Knowledge
Representation and Reasoning, 201
Low-resource Personal Attribute Prediction from Conversation
Personal knowledge bases (PKBs) are crucial for a broad range of applications
such as personalized recommendation and Web-based chatbots. A critical
challenge in building PKBs is extracting personal attribute knowledge from
users' conversation data. Given some users of a conversational system, a
personal attribute, and these users' utterances, our goal is to predict the
ranking of the given personal attribute values for each user. Previous studies
often rely on considerable resources, such as labeled utterances and external
data, yet the attribute knowledge embedded in unlabeled utterances is
underutilized, and their performance on some difficult personal attributes is
still unsatisfactory. In addition, some text classification methods could be
employed to solve this task directly, but they also perform poorly on those
difficult personal attributes. In this paper, we propose PEARL, a novel
framework that predicts personal attributes from conversations by leveraging
the abundant personal attribute knowledge in utterances under a low-resource
setting, in which no labeled utterances or external data are used. PEARL
seamlessly combines biterm semantic information with word co-occurrence
information, employing iteratively updated prior attribute knowledge to refine
the biterm topic model's Gibbs sampling process. Extensive experimental
results show that PEARL outperforms all baseline methods, not only on personal
attribute prediction from conversations over two data sets, but also on the
more general weakly supervised text classification task over one data set.
Comment: Accepted by AAAI'2
BoxEL: Concept and Role Box Embeddings for the Description Logic EL++
Description logic (DL) ontologies extend knowledge graphs (KGs) with
conceptual information and logical background knowledge. In recent years, there
has been growing interest in inductive reasoning techniques for such
ontologies, which promise to complement classical deductive reasoning
algorithms. Similar to KG completion, several existing approaches learn
ontology embeddings in a latent space, while additionally ensuring that they
faithfully capture the logical semantics of the underlying DL. However, they
suffer from several shortcomings, mainly due to a limiting role representation.
We propose BoxEL, which represents both concepts and roles as boxes (i.e.,
axis-aligned hyperrectangles) and demonstrate how it overcomes the limitations
of previous methods. We theoretically prove the soundness of our model and
conduct an extensive experimental evaluation, achieving state-of-the-art
results across a variety of datasets. As part of our evaluation, we introduce a
novel benchmark for subsumption prediction involving both atomic and complex
concepts.
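The geometric idea behind box embeddings can be sketched in a few lines: if every concept is an axis-aligned hyperrectangle, subsumption between atomic concepts corresponds to box containment, and conjunction corresponds to box intersection. The sketch below is an assumed illustration of that representation, not BoxEL's actual model or API.

```python
import numpy as np

# A box is an axis-aligned hyperrectangle given by lower and upper corners.

def box_volume(lower, upper):
    """Volume of a box; zero if it is degenerate (some upper < lower)."""
    side = np.maximum(upper - lower, 0.0)
    return float(np.prod(side))

def intersection(l1, u1, l2, u2):
    """Intersection of two boxes, which is itself a (possibly empty) box."""
    return np.maximum(l1, l2), np.minimum(u1, u2)

def contains(l_outer, u_outer, l_inner, u_inner):
    """Box containment: geometrically, inner ⊑ outer."""
    return bool(np.all(l_outer <= l_inner) and np.all(u_inner <= u_outer))

# Example: the Child box sits inside the Parent box, so Child ⊑ Parent holds.
parent_l, parent_u = np.array([0.0, 0.0]), np.array([4.0, 4.0])
child_l, child_u = np.array([1.0, 1.0]), np.array([2.0, 3.0])
print(contains(parent_l, parent_u, child_l, child_u))  # True
```

Representing roles as boxes too (rather than, say, simple translations) is what the abstract credits for overcoming the limitations of earlier embedding methods.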
Low power predictable memory and processing architectures
Great demand for power-optimized devices shows promising economic potential and draws much attention in industry and academia. Due to the continuously shrinking CMOS process, not only dynamic power but also static power has emerged as a major concern in power reduction. Besides power optimization, average-case power estimation is significant for power budget allocation but also challenging in terms of time and effort.

In this thesis, we introduce a methodology to support modular quantitative analysis for estimating the average power of circuits, based on two concepts named Random Bag Preserving and Linear Compositionality. It shortens simulation time while sustaining high accuracy, increasing the feasibility of power estimation for large systems.

For power saving, we first take advantage of the low-power characteristics of adiabatic logic and asynchronous logic to achieve ultra-low dynamic and static power. We propose two memory cells that can run in adiabatic and non-adiabatic modes. About 90% of dynamic power can be saved in adiabatic mode compared to other up-to-date designs, and about 90% of leakage power is saved. Secondly, a novel logic named Asynchronous Charge Sharing Logic (ACSL) is introduced, which considerably simplifies the realization of completion detection. Beyond the power reduction, ACSL brings another promising feature for average-power estimation, data independence, which makes power estimation effortless and meaningful for modular quantitative average-case analysis. Finally, we present a new asynchronous Arithmetic Logic Unit (ALU) with a ripple-carry adder implemented using a logically reversible/bidirectional characteristic, exhibiting ultra-low power dissipation at a sub-threshold operating point. The proposed adder is able to operate multi-functionally.
Iteratively Learning Embeddings and Rules for Knowledge Graph Reasoning
Reasoning is essential for the development of large knowledge graphs,
especially for completion, which aims to infer new triples based on existing
ones. Both rules and embeddings can be used for knowledge graph reasoning, and
each has its own advantages and difficulties. Rule-based reasoning is accurate
and explainable, but rule learning by searching over the graph suffers from
inefficiency due to the huge search space. Embedding-based reasoning is more
scalable and efficient, as reasoning is conducted via computation between
embeddings, but it has difficulty learning good representations for sparse
entities, because a good embedding relies heavily on data richness. Based on
this observation, in this paper we explore how embedding learning and rule
learning can be combined, with the advantages of each compensating for the
difficulties of the other. We propose IterE, a novel framework that
iteratively learns embeddings and rules: rules are learned from embeddings
with a proper pruning strategy, and embeddings are learned from existing
triples together with new triples inferred by rules. Evaluations of IterE's
embedding quality show that rules help improve the quality of sparse entity
embeddings and their link prediction results. We also evaluate the efficiency
of rule learning and the quality of rules from IterE compared with AMIE+,
showing that IterE generates high-quality rules more efficiently. Experiments
show that iteratively learning embeddings and rules benefits both during
learning and prediction.
Comment: This paper is accepted by WWW'1
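The alternating scheme the abstract describes can be sketched as a loop over toy stand-ins: learn embeddings from the current triples, mine rules (here a single hard-coded composition rule stands in for embedding-guided rule mining with pruning), apply the rules to infer new triples, and repeat. All helper names and the example rule are illustrative, not the paper's implementation.

```python
def learn_embeddings(triples):
    # Stand-in: IterE would train a KG embedding model on the triples here.
    return {e for (h, _, t) in triples for e in (h, t)}

def mine_rules(embeddings):
    # Stand-in: IterE would score and prune candidate axioms via embeddings.
    # Toy composition rule: bornIn(a, b) ∧ locatedIn(b, c) → nationality(a, c)
    return [("bornIn", "locatedIn", "nationality")]

def apply_rules(triples, rules):
    """Infer new triples by chaining each composition rule over the graph."""
    inferred = set()
    for r1, r2, head in rules:
        for (a, p, b) in triples:
            if p != r1:
                continue
            for (b2, q, c) in triples:
                if b2 == b and q == r2:
                    inferred.add((a, head, c))
    return inferred

triples = {("alice", "bornIn", "dublin"), ("dublin", "locatedIn", "ireland")}
for _ in range(2):  # a couple of iterations, for illustration
    embeddings = learn_embeddings(triples)
    rules = mine_rules(embeddings)
    triples |= apply_rules(triples, rules)

print(("alice", "nationality", "ireland") in triples)  # True
```

The point of the alternation is the feedback loop: rule-inferred triples give sparse entities more training data for the next embedding round, and better embeddings in turn guide rule pruning.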